94 research outputs found

    Formalizing and Verifying Workflows Used in Blood Banks

    Blood banks use automation to decrease errors in delivering safe blood for transfusion. The Food and Drug Administration (FDA) of the United States and other organizations recommend that blood banks validate their computer system processes, which is resource and labor intensive. For this reason, we have created a formal workflow model for blood bank operations and used an automated tool to verify that it satisfies safety properties. Our methodology started by understanding and gathering information about blood bank procedures. We then mapped all procedures into processes in a workflow engine and used the verification packages provided by the engine to check the safety properties.
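
    The abstract does not name the workflow engine or its verification packages, but the kind of check it describes can be illustrated with a minimal, self-contained Python sketch. The states, transitions, and safety property below are hypothetical stand-ins for a real blood bank workflow, and the explicit-state reachability search is a generic substitute for the engine's own verifier:

        from collections import deque

        # Hypothetical workflow: each state is a processing stage for a blood
        # unit; transitions model the bank's procedures.
        transitions = {
            "collected": ["tested"],
            "tested": ["labeled", "discarded"],
            "labeled": ["stored"],
            "stored": ["issued"],
            "discarded": [],
            "issued": [],
        }

        def reachable_violation(start, transitions, unsafe):
            # Breadth-first search over reachable states, tracking the path
            # taken; returns a counterexample trace if an unsafe state is hit.
            seen, queue = {start}, deque([(start, [start])])
            while queue:
                state, path = queue.popleft()
                if unsafe(state, path):
                    return path
                for nxt in transitions[state]:
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [nxt]))
            return None

        # Safety property (illustrative): a unit may be "issued" only after
        # it has passed both "tested" and "labeled".
        unsafe = lambda s, path: s == "issued" and not {"tested", "labeled"} <= set(path)
        trace = reachable_violation("collected", transitions, unsafe)
        print("property holds" if trace is None else f"violation: {trace}")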

    Formal Verification of Cyberphysical Systems

    17 USC 105 interim-entered record; under review.
    Computer hosts a virtual roundtable with seven experts to discuss the formal specification and verification of cyberphysical systems.
    http://hdl.handle.net/10945/6944

    Money Laundering Detection Framework to Link the Disparate and Evolving Schemes

    Money launderers hide traces of their transactions with the involvement of entities that participate in sophisticated schemes. Money laundering detection requires unraveling concealed connections among multiple but seemingly unrelated human money laundering networks, ties among actors of those schemes, and amounts of funds transferred among those entities. The link among small networks, whether financial or social, is the primary factor that facilitates money laundering. Hence, the analysis of relations among money laundering networks is required to present the full structure of complex schemes. We propose a framework that uses sequence matching, case-based analysis, social network analysis, and complex event processing to detect money laundering. Our framework represents a single ongoing scheme as an event and links sequences of such events to capture complex relationships among evolving money laundering schemes. The framework can detect multiple associated money laundering networks even in the absence of some evidence. We validated the accuracy of detecting evolving money laundering schemes using a multi-phase test methodology. Our tests used data generated from real-life cases and extrapolated it with a real-life scheme generator that we implemented. Keywords: Anti-Money Laundering, Social Network Analysis, Complex Event Processing
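
    The core linking idea, connecting seemingly unrelated networks through shared entities, can be sketched with the networkx library. The transaction data, entity names, and the use of betweenness centrality below are illustrative assumptions, not the framework's actual pipeline:

        import networkx as nx

        # Hypothetical transaction edges from two seemingly unrelated schemes.
        # An edge (a, b, w) means entity a transferred w dollars to entity b.
        scheme_a = [("shell_1", "broker_x", 9000), ("broker_x", "offshore_1", 8500)]
        scheme_b = [("shell_2", "courier_y", 7000), ("courier_y", "offshore_1", 6800)]

        g = nx.DiGraph()
        for src, dst, amount in scheme_a + scheme_b:
            g.add_edge(src, dst, amount=amount)

        # Entities that appear in both schemes are candidate links between
        # the two networks.
        nodes_a = {n for e in scheme_a for n in e[:2]}
        nodes_b = {n for e in scheme_b for n in e[:2]}
        print("shared entities:", nodes_a & nodes_b)  # {'offshore_1'}

        # Betweenness centrality flags entities sitting on many transfer
        # paths, a common SNA signal for brokers bridging disjoint networks.
        central = nx.betweenness_centrality(g)
        print(sorted(central.items(), key=lambda kv: -kv[1])[:3])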

    A Framework to Reveal Clandestine Organ Trafficking in the Dark Web and Beyond

    Due to the scarcity of transplantable organs, patients have to wait on long lists for many years to get a matching kidney. This scarcity has created an illicit marketplace for wealthy recipients to avoid long waiting times. Brokers arrange such organ transplants and collect most of the payment, which is sometimes channeled to fund other illicit activities. In order to collect and disburse payments, they often resort to money laundering-like schemes of money transfers. As low-cost Internet access arrives in some of the affected countries, social media and the dark web are used to illegally trade human organs. This paper presents a model to assess the risk of human organ trafficking in specific areas and shows methods and tools to discover digital traces of organ trafficking using publicly available tools.

    Optimizing Lawful Responses to Cyber Intrusions

    Cyber intrusions are rarely met with the most effective possible response, less for technical than legal reasons. Different rogue actors (terrorists, criminals, spies, etc.) are governed by overlapping but separate domestic and international legal regimes. Each of these regimes has unique limitations, but also offers unique opportunities for evidence collection, intelligence gathering, and use of force. We propose a framework that automates the mechanistic aspects of the decision-making process, reserving human intervention for only those determinations that require legal judgment and official responsibility. The basis of our framework is a pair of decision trees, one executable solely by the threatened system, the other by the attorneys responsible for the lawful pursuit of the intruders. These parallel decision trees are interconnected and contain pre-distilled legal resources for making an objective, principled determination at each decision point. We offer an open-source development strategy for realizing and maintaining the framework.
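
    One plausible representation of such a decision tree is sketched below in Python. The node fields, the legal bases, and the example incident are all hypothetical illustrations of the paper's idea, not its actual framework: mechanistic checks run automatically, while nodes that require legal judgment escalate to counsel.

        from dataclasses import dataclass
        from typing import Callable, Optional

        @dataclass
        class DecisionNode:
            question: str
            legal_basis: str                                # pre-distilled legal resource
            test: Optional[Callable[[dict], bool]] = None   # machine-executable check
            needs_human: bool = False                       # reserved for attorney judgment
            yes: Optional["DecisionNode"] = None
            no: Optional["DecisionNode"] = None

        def evaluate(node: DecisionNode, incident: dict) -> str:
            # Walk the tree: answer mechanistic questions automatically and
            # escalate wherever legal judgment is required.
            while True:
                if node.needs_human:
                    return f"escalate to counsel: {node.question} [{node.legal_basis}]"
                if node.test is None:
                    return f"terminal action: {node.question} [{node.legal_basis}]"
                nxt = node.yes if node.test(incident) else node.no
                if nxt is None:
                    return f"terminal action: {node.question} [{node.legal_basis}]"
                node = nxt

        # Hypothetical fragment of the machine-executable tree.
        tree = DecisionNode(
            question="Is the intrusion source within domestic jurisdiction?",
            legal_basis="domestic criminal procedure",
            test=lambda i: i["source_country"] == "US",
            yes=DecisionNode("Preserve evidence and refer to law enforcement",
                             "domestic evidence-preservation rules"),
            no=DecisionNode("Seek international cooperation or respond otherwise?",
                            "mutual legal assistance treaties", needs_human=True),
        )
        print(evaluate(tree, {"source_country": "RU"}))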

    Quality of Service for Continuous Media: Metrics, Validation, Implementation and Performance Evaluation

    Multimedia delivered for human consumption consists of a collection of media streams presented in a cohesive and comprehensible manner. A methodology for designing multimedia systems consists of modeling some aspect of the real world, designing Quality of Service (QoS) metrics to measure its quality, performing user studies to validate them, evaluating existing systems on the validated metrics, and, if they are found deficient, improving them or designing new systems. Choosing to model the relevant aspect of the real world as lossy continuous media (CM), this dissertation develops metrics to model intra-stream continuity and inter-stream synchronization of lossy CM streams, validates them through user experiments, evaluates the Berkeley Continuous Media Toolkit (CMT) on the proposed metrics, and designs improvements to CMT to make it adhere to programmer-specified QoS metrics.

    The proposed metrics specify continuity and synchronization, with tolerable limits on average and bursty deviations from perfect continuity, timing, and synchronization constraints. The continuity specification of a CM stream consists of its sequencing, display rate, and drift profiles. The sequencing profile of a CM stream consists of tolerable aggregate and consecutive frame miss ratios. Rate profiles specify the average rendition rate and its variation. Given a rate profile, the ideal time unit for frame display is determined as an offset from the beginning of the stream. The drift profile specifies the average and bursty deviation of frame schedules from such fixed points in time. Synchronization requirements of a collection of CM streams are specified by mixing, rate, and synchronization drift profiles. Mixing profiles specify vectors of frames that can be displayed simultaneously; they consist of average and bursty losses of synchronization. Rate profiles consist of average rates and permissible deviations thereof. Synchronization drift profiles specify permissible aggregate and bursty time drifts between schedules of simultaneously displayable frames. It is shown that the rate profile of a collection of synchronized streams is definable in terms of the rate profiles of its component streams. It is also shown that the mixing and drift profiles of a collection of streams are not definable in terms of the sequencing and drift profiles of its constituents. An important consequence of this mutual independence of synchronization and continuity specifications is that, on a general-purpose platform with limited resources, synchronized display of CM streams may require QoS tradeoffs. An algorithm that makes such tradeoffs is presented as a proof of the applicability of our metrics in a realistic environment.

    The proposed metrics were validated by means of a user survey, which consisted of presenting users with a series of professionally edited CM clips containing controlled defects and obtaining their opinions on Likert and imperceptible/tolerable/annoying scales. Viewer discontent with aggregate video losses gradually increases with the amount of loss. We concluded that average video losses of 17/100 to 23/100 are tolerated, and losses above 23/100 are unacceptable. Furthermore, as observed, a consecutive video loss of about two frames in 100 does not cause user dissatisfaction. Although losing two consecutive video frames is noticed by most users, once this threshold is reached there is little further quality degradation due to consecutive losses. The corresponding figure for audio is 3 frames. Our results indicate that even a 20% rate variation in a newscast-type video does not result in significant user dissatisfaction. The situation for audio rate variations is quite different: even a 5% rate variation in audio is noticed by most observers. Our results also indicate that at an aggregate audio-video synchronization loss of about 20/100, human tolerance plateaus. The corresponding figure for consecutive audio-video synchronization loss is about 3 frames.

    For the performance evaluation, losses and timing drift in stream continuity and synchronization were measured in the presence of processor and network loads. For stream continuity, it was observed that increasing loads at lower frame rates significantly increase aggregate frame drops, while increasing loads at higher frame rates significantly increase consecutive frame drops. Because a large number of consecutive frames are dropped at higher rates, the frames that are played appear in a more timely manner. For synchronization losses, it was shown that, according to Steinmetz's metric, CMT provides imperceptible audio-video mis-synchronization for about 10 seconds, and tolerable synchronization for about 13 seconds, from the start of the clips for local clients under low processor loads. It is also shown that under high loads, synchronization is achieved at the cost of losing media frames.

    In order to control continuity and synchronization losses, we propose four solutions. First, to control losses, the objects processing CM streams in CMT should be made QoS-aware, in the sense that the choice between delaying and dropping frames is based on user-specified QoS parameters. Second, to achieve QoS-based synchronization, we propose a new paradigm, Stream Groups, in which grouped objects simultaneously fetch, transport, and render corresponding frames from their respective constituent streams. Third, buffers should be introduced at client sites, which servers can fill so that clients run independently of servers. Fourth, a feedback mechanism from clients should control their respective servers.
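
    As a concrete illustration of the sequencing profile defined above, the following minimal Python sketch (not the dissertation's code; the function name and example trace are invented) computes the aggregate frame miss ratio and the longest run of consecutive misses from a frame-delivery trace:

        def sequencing_profile(delivered, total_frames):
            # delivered: set of frame indices that were rendered on time.
            missed = sum(1 for i in range(total_frames) if i not in delivered)
            aggregate_miss_ratio = missed / total_frames

            # Longest run of consecutively missed frames.
            longest_run, run = 0, 0
            for i in range(total_frames):
                run = run + 1 if i not in delivered else 0
                longest_run = max(longest_run, run)
            return aggregate_miss_ratio, longest_run

        # A 100-frame clip losing frames 40, 41, and 70: 3/100 aggregate
        # misses and at most 2 consecutive misses, within the tolerances
        # reported above (17/100-23/100 aggregate, ~2 consecutive for video).
        agg, consec = sequencing_profile(set(range(100)) - {40, 41, 70}, 100)
        print(f"aggregate miss ratio: {agg:.2f}, max consecutive misses: {consec}")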

    Providing Secure Communication Services on the Public Telephone Network Infrastructures

    Proceedings of the Tenth International Conference on Telecommunication Systems Modeling and Analysis, October 3-6, 2002, Monterey, California.
    The public telephone network has been evolving from the manually switched wires carrying analog-encoded voice of the 19th century to an automatically switched grid of copper-wired, fiber-optic, and radio-linked portions carrying digitally encoded voice and other data. Simultaneously, as our security consciousness increases, so does our desire to keep our conversations private. Applied to the traffic traversing the globe on the public telephone network, privacy requires that our telephone companies provide us with a service whereby unintended third parties are unable to access others' data. However, existing public telephone network infrastructures do not provide such a service. This paper shows a method to enhance the PSTN call processing model to provide end-to-end voice privacy and access control services within the boundaries of the existing public telephone network infrastructures. The proposed enhancement uses public and symmetric key cryptography. This work is part of an ongoing project on securing telecommunication system architectures and protocols.
    Sponsors: NSF (grant CCR-0013351); Center for Secure Information Systems at GMU; Prof. S. Jajodia.
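
    The abstract describes the enhancement only as using public and symmetric key cryptography. A generic hybrid pattern such a service could build on, wrapping a symmetric voice-session key with the callee's public key, is sketched below in Python using the pyca/cryptography library; this is an assumption-laden illustration, not the paper's protocol:

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        # Callee's key pair (in a real deployment it would be certified by
        # the telephone network operator; that PKI detail is assumed here).
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        # Caller: generate a symmetric session key and wrap it with the
        # callee's public key.
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        session_key = AESGCM.generate_key(bit_length=128)
        wrapped_key = public_key.encrypt(session_key, oaep)

        # Callee: unwrap the session key with the private key.
        unwrapped = private_key.decrypt(wrapped_key, oaep)

        # Both parties now encrypt voice frames with the shared symmetric key.
        aesgcm = AESGCM(unwrapped)
        nonce = os.urandom(12)
        voice_frame = b"...digitally encoded voice samples..."
        ciphertext = aesgcm.encrypt(nonce, voice_frame, None)
        assert aesgcm.decrypt(nonce, ciphertext, None) == voice_frame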